Ilya Sutskever

Ilya Sutskever (born 8 December 1986) is an Israeli-Canadian computer scientist who specializes in machine learning. He has made several major contributions to the field of deep learning. With Alex Krizhevsky and Geoffrey Hinton, he co-invented AlexNet, a convolutional neural network.

Sutskever co-founded OpenAI and formerly served as its chief scientist. In 2023, he was one of the members of OpenAI's board that ousted Sam Altman from his position as the organization's CEO; Altman was reinstated a week later, and Sutskever stepped down from the board. In June 2024, Sutskever co-founded the company Safe Superintelligence alongside Daniel Gross and Daniel Levy.


Early life and education
Sutskever was born into a family in Nizhny Novgorod, Russia (then Gorky, Soviet Union). At the age of 5, he made aliyah with his family and lived in Jerusalem until he was 16, when his family moved to Canada. Sutskever attended the Open University of Israel from 2000 to 2002. After moving to Canada, he attended the University of Toronto in Ontario.

At the University of Toronto, Sutskever received a Bachelor of Science in mathematics in 2005, a Master of Science in computer science in 2007, and a Doctor of Philosophy in computer science in 2013. His doctoral advisor was Geoffrey Hinton.

In 2012, Sutskever built AlexNet in collaboration with Geoffrey Hinton and Alex Krizhevsky. To support AlexNet's computing demands, he bought many GTX 580 GPUs online.


Career and research
In 2012, Sutskever spent about two months as a postdoc with Andrew Ng at Stanford University. He then returned to the University of Toronto and joined Hinton's new research company DNNResearch, a spinoff of Hinton's research group. In 2013, Google acquired DNNResearch and hired Sutskever as a research scientist at Google Brain.

At Google Brain, Sutskever worked with Oriol Vinyals and Quoc Viet Le to create the sequence-to-sequence learning algorithm, and worked on TensorFlow. He is also one of the AlphaGo paper's many co-authors.

At the end of 2015, Sutskever left Google to become co-founder and chief scientist of the newly founded organization OpenAI.

In 2022, Sutskever tweeted, "it may be that today's large neural networks are slightly conscious", which triggered debates about machine consciousness. He is considered to have played a key role in the development of ChatGPT. In 2023, he announced that he would co-lead OpenAI's new "Superalignment" project, which aimed to solve the alignment problem of superintelligences within four years. He wrote that even if superintelligence seems far off, it could happen this decade.

Sutskever was formerly one of the six board members of the nonprofit entity that controls OpenAI. On November 17, 2023, the board fired Sam Altman, saying that "he was not consistently candid in his communications with the board". The Information speculated that the decision was partly driven by conflict over the extent to which the company should commit to AI safety. In an all-hands company meeting shortly after the board meeting, Sutskever said that firing Altman was "the board doing its duty", but the next week, he expressed regret at having participated in Altman's ouster. Altman's firing and the subsequent resignation of co-founder Greg Brockman led three senior researchers to resign from OpenAI. After that, Sutskever stepped down from the OpenAI board and was absent from OpenAI's office. Some sources suggested he was leading the team remotely, while others said he no longer had access to the team's work.

In May 2024, Sutskever announced his departure from OpenAI to focus on a new project that was "very personally meaningful" to him. His decision followed a turbulent period at OpenAI marked by leadership crises and internal debates about the direction of AI development and safety protocols. Jan Leike, the other leader of the superalignment project, announced his departure hours later, citing an erosion of safety and trust in OpenAI's leadership.

In June 2024, Sutskever announced Safe Superintelligence Inc., a new company he founded with Daniel Gross and Daniel Levy, with offices in Palo Alto and Tel Aviv. In contrast to OpenAI, which releases revenue-generating products, Sutskever said the new company's "first product will be the safe superintelligence, and it will not do anything else up until then". In September 2024, the company announced that it had raised $1 billion from venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. In March 2025, Safe Superintelligence Inc. raised a further $2 billion and reportedly reached a $32 billion valuation, largely on the strength of Sutskever's reputation.

In an October 2024 interview after winning the Nobel Prize in Physics, Geoffrey Hinton expressed support for Sutskever's decision to fire Altman, emphasizing concerns about AI safety.


Awards and honors
  • In 2015, Sutskever was named in MIT Technology Review's 35 Innovators Under 35.
  • In 2018, he was the keynote speaker at Ntech 2018 and AI Frontiers Conference 2018.
  • In 2022, he was elected a Fellow of the Royal Society (FRS).
  • In 2023 and 2024, he was included in Time's list of the 100 most influential people in AI.
